
    Elementary Derivative Tasks and Neural Net Multiscale Analysis of Tasks

    Neural nets are known to be universal approximators. In particular, formal neurons implementing wavelets have been shown to build nets able to approximate any multidimensional task. Such highly specialized formal neurons may, however, be difficult to obtain biologically and/or industrially. In this paper we relax the constraint of a strict ``Fourier analysis'' of tasks. Rather, we use a finite number of more realistic formal neurons implementing elementary tasks such as ``window'' or ``Mexican hat'' responses, with adjustable widths. This is shown to provide a reasonably efficient, practical and robust multifrequency analysis. A training algorithm, optimizing the task with respect to the widths of the responses, reveals two distinct training modes. The first mode induces some of the formal neurons to become identical and hence promotes ``derivative tasks''; the other mode keeps the formal neurons distinct.
    Comment: latex neurondlt.tex, 7 files, 6 figures, 9 pages [SPhT-T01/064], submitted to Phys. Rev.
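    As a rough illustration of the idea (not the authors' algorithm), the sketch below builds a small bank of ``window'' and ``Mexican hat'' formal neurons with adjustable widths and fits their output weights to a hypothetical one-dimensional task; in the paper the widths themselves are the trained parameters.

```python
import numpy as np

def window(x, center, width):
    """Smooth "window" (Gaussian bump) response with an adjustable width."""
    u = (x - center) / width
    return np.exp(-0.5 * u**2)

def mexican_hat(x, center, width):
    """"Mexican hat" (second derivative of a Gaussian) response with an adjustable width."""
    u = (x - center) / width
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

# Hypothetical one-dimensional target task and a small bank of formal neurons.
xs = np.linspace(-3.0, 3.0, 200)
target = np.sin(2.0 * xs) * np.exp(-xs**2 / 4.0)

centers = np.linspace(-3.0, 3.0, 10)
widths = np.full_like(centers, 0.5)   # the adjustable widths a training algorithm would tune
bank = [window, mexican_hat]
responses = np.stack([bank[k % 2](xs, c, w)
                      for k, (c, w) in enumerate(zip(centers, widths))], axis=1)

# Output weights fitted by least squares; gradient steps on the same squared error
# with respect to the widths would play the role of the training described in the abstract.
coeffs, *_ = np.linalg.lstsq(responses, target, rcond=None)
approx = responses @ coeffs
print("RMS error:", np.sqrt(np.mean((approx - target) ** 2)))
```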

    Hyperplane Neural Codes and the Polar Complex

    Hyperplane codes are a class of convex codes that arise as the output of a one-layer feed-forward neural network. Here we establish several natural properties of stable hyperplane codes in terms of the {\it polar complex} of the code, a simplicial complex associated to any combinatorial code. We prove that the polar complex of a stable hyperplane code is shellable and show that most currently known properties of hyperplane codes follow from the shellability of the appropriate polar complex.
    Comment: 23 pages, 5 figures. To appear in Proceedings of the Abel Symposium.
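    For concreteness, here is a small sketch of the polar complex construction under the usual convention of one facet per codeword, built from the vertex i when neuron i fires and a ``barred'' copy of i when it does not; the code, variable names, and the choice to encode barred vertices as negative integers are our own illustration.

```python
def polar_complex_facets(codewords, n):
    """Facets of the polar complex: for each codeword c on neurons 1..n, take vertex i
    if neuron i fires in c and the barred vertex (encoded here as -i) otherwise."""
    return [frozenset(i if i in c else -i for i in range(1, n + 1)) for c in codewords]

# A hypothetical combinatorial code on 3 neurons.
code = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({2, 3})]
for facet in polar_complex_facets(code, 3):
    print(sorted(facet, key=abs))
```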

    An Arbitrary Two-qubit Computation In 23 Elementary Gates

    Quantum circuits currently constitute a dominant model for quantum computation. Our work addresses the problem of constructing quantum circuits to implement an arbitrary given quantum computation, in the special case of two qubits. We pursue circuits without ancilla qubits and with as small a number of elementary quantum gates as possible. Our lower bound for worst-case optimal two-qubit circuits calls for at least 17 gates: 15 one-qubit rotations and 2 CNOTs. We also constructively prove a worst-case upper bound of 23 elementary gates, of which at most 4 (CNOT) entail multi-qubit interactions. Our analysis shows that synthesis algorithms suggested in previous work, although more general, entail much larger quantum circuits than ours in the special case of two qubits. One such algorithm has a worst case of 61 gates, of which 18 may be CNOTs. Our techniques rely on the KAK decomposition from Lie theory as well as the polar and spectral (symmetric Schur) matrix decompositions from numerical analysis and operator theory. They are related to the canonical decomposition of a two-qubit gate with respect to the ``magic basis'' of phase-shifted Bell states, published previously. We further extend this decomposition in terms of elementary gates for quantum computation.
    Comment: 18 pages, 7 figures. Version 2 gives correct credits for the GQC "quantum compiler". Version 3 adds justification for our choice of elementary gates and adds a comparison with classical library-less logic synthesis. It adds acknowledgements and a new reference, adds full details about the 8-gate decomposition of topC-V and stealthily fixes several minor inaccuracies. NOTE: Using a new technique, we recently improved the lower bound to 18 gates and (tada!) found a circuit decomposition that requires 18 gates or less. This work will appear as a separate manuscript.
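    A small numerical sketch of the property behind the magic-basis/KAK approach: conjugating by the magic basis of phase-shifted Bell states turns purely local two-qubit gates into real orthogonal matrices, which is what isolates the entangling content of a gate. The basis convention and the random-unitary helper below are one common choice, not necessarily the paper's.

```python
import numpy as np

# Magic basis of phase-shifted Bell states (one common convention).
M = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]]) / np.sqrt(2)

def random_su2(rng):
    """Random single-qubit SU(2) gate (QR-based, good enough for a numerical check)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))   # random unitary
    return q / np.sqrt(np.linalg.det(q))        # fix the global phase: det = 1

rng = np.random.default_rng(0)
A, B = random_su2(rng), random_su2(rng)
local = np.kron(A, B)                # a purely local (non-entangling) two-qubit gate
magic = M.conj().T @ local @ M       # the same gate expressed in the magic basis
print("max imaginary part:", np.abs(magic.imag).max())   # ~0: real orthogonal
```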

    Spectral high resolution feature selection for retrieval of combustion temperature profiles

    Proceedings of: 7th International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2006 (Burgos, Spain, September 20-23, 2006).
    The use of high spectral resolution measurements to obtain a retrieval of certain physical properties related to the radiative transfer of energy leads, a priori, to better accuracy. This improvement in accuracy is not easy to achieve, however, because of the large amount of data involved, which makes any processing difficult, and because of its redundancies. To address this problem, a pick selection based on principal component analysis has been adopted in order to perform the necessary feature selection over the different channels. In this paper, the capability to retrieve the temperature profile in a combustion environment using neural networks jointly with this spectral high resolution feature selection method is studied.
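    As a rough sketch of the pipeline (synthetic stand-in data, not the IDEAL 2006 setup), one can select a handful of spectral channels via principal component analysis and feed only those to a neural network that retrieves the temperature profile; the pick rule below is a naive illustration, not the paper's exact criterion.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data (assumption): 500 spectra with 2000 high-resolution channels,
# each associated with a temperature profile sampled at 10 points along the flame.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 2000))
profiles = rng.normal(size=(500, 10))

# Pick selection: keep the channels that load most strongly on the leading principal
# components, instead of feeding all the redundant channels to the network.
pca = PCA(n_components=5).fit(spectra)
picked = np.unique(np.abs(pca.components_).argmax(axis=1))  # one peak channel per component

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
net.fit(spectra[:, picked], profiles)   # retrieval of the temperature profile
```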

    Efficient decomposition of quantum gates

    Optimal implementation of quantum gates is crucial for designing a quantum computer. We consider the matrix representation of an arbitrary multiqubit gate. By ordering the basis vectors using the Gray code, we construct the quantum circuit which is optimal in the sense of fully controlled single-qubit gates and yet is equivalent to the multiqubit gate. In the second step of the optimization, superfluous control bits are eliminated, which eventually results in a smaller total number of elementary gates. In our scheme the number of controlled NOT gates is O(4^n), which coincides with the theoretical lower bound.
    Comment: 4 pages, 2 figures
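    The Gray-code ordering mentioned above is easy to generate; the sketch below (our own illustration) shows the standard binary-reflected Gray code, in which consecutive basis states differ in a single bit, so the control pattern of successive fully controlled single-qubit gates changes by one CNOT.

```python
def gray_code(n):
    """Basis-state ordering in which consecutive indices differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

order = gray_code(3)
print([format(g, "03b") for g in order])

# Consecutive codes differ in exactly one bit.
for a, b in zip(order, order[1:]):
    assert bin(a ^ b).count("1") == 1
```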

    Storage capacity of correlated perceptrons

    We consider an ensemble of K single-layer perceptrons exposed to random inputs and investigate the conditions under which the couplings of these perceptrons can be chosen such that prescribed correlations between the outputs occur. A general formalism is introduced using a multi-perceptron cost function that allows one to determine the maximal number of random inputs as a function of the desired values of the correlations. Replica-symmetric results for K=2 and K=3 are compared with the properties of two-layer networks with tree structure and a fixed Boolean function between hidden units and output. The results show which correlations in the hidden layer of multi-layer neural networks are crucial for the value of the storage capacity.
    Comment: 16 pages, LaTeX2e
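    A toy illustration (our own, not the replica calculation): K perceptrons share the same random inputs, and the quantity whose prescribed value enters the capacity analysis is the correlation between their outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, P = 2, 200, 1000                       # perceptrons, input dimension, patterns
J = rng.normal(size=(K, N))                  # couplings of the K perceptrons
xi = rng.choice([-1.0, 1.0], size=(P, N))    # random input patterns
sigma = np.sign(xi @ J.T)                    # outputs sigma_k = sign(J_k . xi)

# Output correlation between perceptrons 1 and 2, averaged over patterns; the storage
# capacity in the paper is studied as a function of such prescribed correlations.
print("empirical correlation:", np.mean(sigma[:, 0] * sigma[:, 1]))
```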

    Multilayer neural networks with extensively many hidden units

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behaviour is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.
    Comment: 4 pages, 2 figures
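    A toy instance of such an architecture (our own choice of functions, not the paper's model details): each hidden unit computes a symmetric Boolean function of the ±1 inputs, i.e. one that depends only on how many inputs equal +1, and the hidden layer feeds the output through discrete couplings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
K = N                                        # "extensively many": K scales with the input dimension

# One symmetric Boolean function per hidden unit, stored as a lookup table over the
# possible counts 0..N of +1 inputs (symmetric means only this count matters).
tables = rng.choice([-1.0, 1.0], size=(K, N + 1))
w = rng.choice([-1.0, 1.0], size=K)          # discrete hidden-to-output couplings

def network_output(x):
    count = int(np.sum(x == 1.0))            # the only information a symmetric function uses
    hidden = tables[:, count]                # +/-1 hidden activations
    return np.sign(w @ hidden)

x = rng.choice([-1.0, 1.0], size=N)
print(network_output(x))
```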

    Classification Of Breast Lesions Using Artificial Neural Network.

    This paper presents a study on the classification of breast lesions using an artificial neural network. Thirteen morphological features have been extracted from breast lesion cells and used as the neural network inputs for the classification.
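    A minimal sketch of such a classifier, assuming synthetic stand-in data since the paper's cell measurements are not reproduced here: 13 morphological features per lesion and a small feed-forward network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (assumption): 13 morphological features per lesion,
# with a binary benign/malignant label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # arbitrary rule just to make the demo learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```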

    Transients and asymptotics of natural gradient learning

    We analyse natural gradient learning in a two-layer feed-forward neural network using a statistical mechanics framework which is appropriate for large input dimension. We find significant improvement over standard gradient descent in both the transient and asymptotic phases of learning.
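    For readers unfamiliar with the method, a minimal sketch of a natural gradient step on a toy model (logistic regression here, not the paper's two-layer network): the ordinary gradient is preconditioned by the inverse Fisher information matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)

w = np.zeros(20)
eta = 0.5
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y)                          # ordinary gradient of the log loss
    fisher = (X * (p * (1 - p))[:, None]).T @ X / len(y)   # Fisher information matrix
    w -= eta * np.linalg.solve(fisher + 1e-6 * np.eye(20), grad)  # natural gradient step
```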